CN110520865A - Face recognition method, apparatus and electronic device - Google Patents

Face recognition method, apparatus and electronic device

Info

Publication number
CN110520865A
CN110520865A (application CN201980001102.6A)
Authority
CN
China
Prior art keywords
image
face
living body
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980001102.6A
Other languages
Chinese (zh)
Inventor
潘雷雷
吴勇辉
范文文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Huiding Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huiding Technology Co Ltd
Publication of CN110520865A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Abstract

A face recognition method, apparatus, and electronic device capable of distinguishing genuine faces from fakes, thereby improving the security of face recognition. The face recognition method includes: acquiring a first target image of a first recognition target; processing the first target image to obtain at least one first edge feature image; determining, based on the at least one first edge feature image, whether the first recognition target is a live face, and outputting a liveness determination result; performing feature template matching according to the first target image, and outputting a matching result; and outputting a face recognition result according to the liveness determination result and the matching result.

Description

Face recognition method, apparatus and electronic device
Technical field
This application relates to the field of biometric identification technology, and more particularly, to a face recognition method, apparatus, and electronic device.
Background
Face recognition is a biometric technology that performs identity authentication based on a person's facial feature information. A camera or video camera captures images or video streams containing a face, automatically detects and tracks the face in the images, and then applies a series of related techniques to the detected face, such as image preprocessing, image feature extraction, and face matching and identification; this is also commonly referred to as portrait recognition or facial recognition. With the rapid development of computer and network technologies, face recognition has been widely applied in industries and fields such as smart access control, mobile terminals, public security, entertainment, and military applications.
Current face recognition is generally performed on a two-dimensional (2D) image of the face, judging whether the 2D image shows a specific user's face without judging whether the 2D image comes from a live face. In other words, in the prior art, 2D face recognition based on a 2D image has no anti-spoofing capability, and its security is therefore poor.
Summary of the invention
Embodiments of the present application provide a face recognition method, apparatus, and electronic device that can distinguish genuine faces from fakes, thereby improving the security of face recognition.
In a first aspect, a face recognition method is provided, including:
acquiring a first target image of a first recognition target;
processing the first target image to obtain at least one first edge feature image;
determining, based on the at least one first edge feature image, whether the first recognition target is a live face, and outputting a liveness determination result;
performing feature template matching according to the first target image, and outputting a matching result;
outputting a face recognition result according to the liveness determination result and the matching result.
The present application provides a face recognition scheme with anti-spoofing capability: the acquired target image is processed to obtain edge feature images, and face anti-spoofing is performed based on the edge feature images. On the basis of judging from the edge feature images whether the target is a live face, feature template matching is performed on the target image to judge whether the target is the user, which greatly improves the security of the face recognition apparatus and the electronic device.
In a possible implementation, outputting a face recognition result according to the liveness determination result and the matching result includes:
when the matching result is success, outputting the face recognition result according to the liveness determination result; or, when the liveness determination result is live, outputting the face recognition result according to the matching result; or, when the matching result is failure or the liveness determination result is non-live, outputting the face recognition result.
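The combination logic can be sketched as follows. This is a minimal illustration, not claim language: the rule that overall recognition succeeds only when template matching succeeds and the target is judged live is one reading of the alternatives listed above, and the function name is an assumption.

```python
def face_recognition_result(matching_ok: bool, is_live: bool) -> str:
    """Combine the template-matching result with the liveness determination.

    Illustrative sketch: recognition succeeds only when the target both
    matches a stored feature template and is judged to be a live face.
    """
    if matching_ok and is_live:
        return "success"
    if not matching_ok:
        return "failure: template matching failed"
    return "failure: non-live face (possible photo/video/model spoof)"

# A stolen photo may match the user's templates but fails the liveness check:
print(face_recognition_result(matching_ok=True, is_live=False))
```

Either check failing blocks the unlock, which is the anti-spoofing property the scheme aims for.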
In a possible implementation, performing feature template matching according to the first target image and outputting a matching result includes:
performing face detection based on the first target image;
when face detection succeeds, obtaining a first face image based on the first target image;
matching the first face image against a plurality of stored first feature templates;
when the first face image matches any one of the plurality of first feature templates, outputting a matching result of success; or,
when the first face image fails to match the plurality of first feature templates, outputting a matching result of failure;
or, when face detection fails, outputting a matching result of failure.
In a possible implementation, the first target image is a two-dimensional infrared image.
In a possible implementation, processing the first target image to obtain at least one first edge feature image includes:
performing convolution on the first target image with a plurality of low-pass convolution kernels to obtain a plurality of first low-frequency feature images;
subtracting two different first low-frequency feature images among the plurality of first low-frequency feature images to obtain one first edge feature image of the at least one first edge feature image.
In a possible implementation, the low-pass convolution kernels are Gaussian convolution kernels, and the first edge feature image is a difference-of-Gaussians (DOG) image.
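A minimal sketch of the DOG construction described above: two Gaussian low-pass filtered copies of the image are subtracted, so flat regions cancel while edges survive. The sigma values are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_image(img: np.ndarray, sigma_small: float = 1.0,
              sigma_large: float = 2.0) -> np.ndarray:
    """Difference-of-Gaussians edge feature image.

    Subtracting two low-pass (Gaussian) filtered copies cancels slowly
    varying regions and keeps areas where gray levels change quickly,
    i.e. the edge information of the target image.
    """
    low1 = gaussian_filter(img.astype(np.float64), sigma_small)
    low2 = gaussian_filter(img.astype(np.float64), sigma_large)
    return low1 - low2

# A synthetic grayscale "image": dark left half, bright right half.
img = np.zeros((64, 64))
img[:, 32:] = 255.0
dog = dog_image(img)
print(dog.shape)                       # same size as the input
print(abs(dog[:, :16]).max() < 1e-6)   # flat region: DOG response is ~0
print(abs(dog[:, 30:34]).max() > 1.0)  # near the edge: strong response
```

With several kernel widths, subtracting different pairs of low-frequency images yields multiple edge feature images, as the implementation above describes.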
In a possible implementation, determining, based on the at least one first edge feature image, whether the first recognition target is a live face includes:
downscaling the at least one first edge feature image to obtain at least one first target edge feature image, and determining, based on the at least one first target edge feature image, whether the first recognition target is a live face.
In a possible implementation, determining, based on the at least one first target edge feature image, whether the first recognition target is a live face includes:
performing classification on the at least one first target edge feature image through a convolutional neural network, to determine whether the first recognition target is a live face.
In a possible implementation, the convolutional neural network includes: at least one convolutional layer, at least one activation layer, and at least one fully connected layer.
In a possible implementation, performing classification on the at least one first target edge feature image through the convolutional neural network includes:
performing convolution on the at least one first target edge feature image through the at least one convolutional layer to obtain a plurality of feature maps;
performing nonlinear processing on the plurality of feature maps through the at least one activation layer to obtain a plurality of sparse feature maps;
fully connecting the plurality of sparse feature maps through the at least one fully connected layer to obtain a plurality of feature constants, and performing classification on the plurality of feature constants using a classification function.
In a possible implementation, the number of the at least one convolutional layer and of the at least one activation layer is 2, and the number of the at least one fully connected layer is 1.
In a possible implementation, the at least one convolutional layer includes a first convolutional layer and a second convolutional layer, the convolution stride of the first convolutional layer being 1 and the convolution stride of the second convolutional layer being 2.
In a possible implementation, the convolution kernels in the at least one convolutional layer are 3*3-pixel matrices, and/or the activation function in the at least one activation layer is a parametric rectified linear unit (PReLU) function, and/or the classification function in the at least one fully connected layer is the Sigmoid function.
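A minimal numpy sketch of this forward pass: two 3*3 convolutional layers with strides 1 and 2, PReLU activations, one fully connected layer, and a Sigmoid classification function. The weights are random and untrained, and the 16*16 input size and single-kernel layers are assumptions for illustration; a real implementation would use a trained network with multiple kernels per layer.

```python
import numpy as np

def conv2d(x, kernel, stride):
    """Valid 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def prelu(x, a=0.1):
    """Parametric rectified linear unit: identity for x>0, slope a otherwise."""
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def liveness_score(edge_img, k1, k2, w, b, a=0.1):
    """Forward pass: conv(3x3, stride 1) -> PReLU -> conv(3x3, stride 2)
    -> PReLU -> flatten -> fully connected -> Sigmoid classification."""
    f1 = prelu(conv2d(edge_img, k1, stride=1), a)
    f2 = prelu(conv2d(f1, k2, stride=2), a)
    z = f2.ravel() @ w + b
    return sigmoid(z)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))        # stand-in for a downscaled DOG image
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))
# conv1: 16 -> 14 (stride 1); conv2: 14 -> 6 (stride 2); flatten -> 36 values
w = 0.01 * rng.standard_normal(36)         # small untrained weights
score = liveness_score(img, k1, k2, w, b=0.0)
print(0.0 < score < 1.0)                   # Sigmoid output: a liveness probability
```

The stride-2 second convolution also halves the spatial resolution, which is one reason a separate pooling layer can be unnecessary in such a small network.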
In a possible implementation, the method further includes:
acquiring a second target image of a second recognition target;
processing the second target image to obtain at least one second edge feature image;
performing face anti-spoofing discrimination based on the at least one second edge feature image, to determine whether the second recognition target is a live face, wherein the result of the face anti-spoofing discrimination is used for establishing a face feature template.
In a possible implementation, the second target image is a second infrared image.
In a possible implementation, the method further includes:
establishing the face feature template based on the second target image.
In a possible implementation, the method further includes:
performing face detection based on the second target image;
wherein establishing the face feature template based on the second target image includes:
when face detection succeeds, cropping a face image from the second target image to form a second face image, and establishing the face feature template based on the second face image.
In a possible implementation, establishing the face feature template based on the second face image includes:
judging whether the second face image belongs to a face feature template library;
when the second face image belongs to the face feature template library, matching the second face image against a plurality of face feature templates in the face feature template library;
when the second face image does not belong to the face feature template library, performing face anti-spoofing discrimination based on the at least one second edge feature image, and, when the second recognition target is determined to be a live face, establishing the second face image as a face feature template.
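A sketch of this enrollment branch under stated assumptions: the predicates for library membership, template matching, and liveness are stand-in callables, and the control flow is a simplified reading of the implementations above (a new face image becomes a template only after the anti-spoofing check confirms a live face).

```python
def try_enroll(face_img, template_library, belongs_to_library, matches_any,
               is_live_face):
    """Enrollment sketch: anti-spoofing gates every template creation.

    The predicate arguments are assumed stand-ins for the patent's
    membership, matching, and liveness-discrimination steps.
    """
    if belongs_to_library(face_img, template_library):
        if matches_any(face_img, template_library) and is_live_face(face_img):
            template_library.append(face_img)
            return "enrolled (matched existing user, live)"
        return "not enrolled"
    if is_live_face(face_img):
        template_library.append(face_img)
        return "enrolled (new template, live)"
    return "rejected: non-live face"

library = ["template_A"]
result = try_enroll("face_B", library,
                    belongs_to_library=lambda f, lib: False,
                    matches_any=lambda f, lib: False,
                    is_live_face=lambda f: True)
print(result)    # a live face not yet in the library is enrolled
print(library)
```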
In a possible implementation, matching the second face image against the plurality of face feature templates in the face feature template library includes:
when the matching succeeds, performing face anti-spoofing discrimination based on the at least one second edge feature image;
when the second recognition target is determined to be a live face, establishing the second face image as a face feature template.
In a possible implementation, performing face anti-spoofing discrimination based on the at least one second edge feature image when the matching succeeds includes:
when the matching succeeds, acquiring 3D point cloud data of the second recognition target;
when the 3D point cloud data is a valid point cloud, performing face anti-spoofing discrimination based on the at least one second edge feature image.
In a possible implementation, processing the second target image to obtain at least one second edge feature image includes:
performing convolution on the second target image with a plurality of low-pass convolution kernels to obtain a plurality of second low-frequency feature images;
subtracting two different second low-frequency feature images among the plurality of second low-frequency feature images to obtain one second edge feature image of the at least one second edge feature image.
In a possible implementation, the low-pass convolution kernels are Gaussian convolution kernels.
In a possible implementation, performing face anti-spoofing discrimination based on the at least one second edge feature image includes:
downscaling the at least one second edge feature image to obtain at least one second target edge feature image, and performing face anti-spoofing discrimination based on the at least one second target edge feature image.
In a possible implementation, performing face anti-spoofing discrimination based on the at least one second target edge feature image includes:
performing classification on the at least one second target edge feature image through a convolutional neural network, to determine whether the second recognition target is a live face.
In a possible implementation, the convolutional neural network includes: two convolutional layers, two activation layers, and one fully connected layer.
In a possible implementation, the convolution kernels in the two convolutional layers are 3*3 matrices, with convolution strides of 1 and 2 respectively; and/or
the activation function in the two activation layers is a parametric rectified linear unit (PReLU) function; and/or
the classification function in the one fully connected layer is the Sigmoid function.
In a second aspect, a face recognition apparatus is provided, including a processor configured to execute the face recognition method in the first aspect or any possible implementation of the first aspect.
In a third aspect, an electronic device is provided, including the face recognition apparatus in the second aspect or any possible implementation of the second aspect.
In a fourth aspect, a chip is provided, the chip including an input/output interface, at least one processor, at least one memory, and a bus, the at least one memory being configured to store instructions, and the at least one processor being configured to call the instructions in the at least one memory to execute the method in the first aspect or any possible implementation of the first aspect.
In a fifth aspect, a computer-readable medium is provided for storing a computer program, the computer program including instructions for executing the first aspect or any possible implementation of the first aspect.
In a sixth aspect, a computer program product including instructions is provided; when a computer runs the instructions of the computer program product, the computer executes the face recognition method in the first aspect or any possible implementation of the first aspect.
Specifically, the computer program product can run on the electronic device of the third aspect.
Brief description of the drawings
Fig. 1(a) is a schematic block diagram of a face recognition apparatus according to an embodiment of the present application.
Fig. 1(b) is a schematic flowchart of a face recognition method according to an embodiment of the present application.
Fig. 1(c) is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
Fig. 4 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
Fig. 5 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
Fig. 6 is a schematic flowchart of another face recognition method according to an embodiment of the present application.
Fig. 7 is a coordinate diagram of a Gaussian convolution kernel according to an embodiment of the present application.
Fig. 8(a) is an N*N-pixel infrared image matrix according to an embodiment of the present application.
Fig. 8(b) is a 3*3 two-dimensional filter matrix according to an embodiment of the present application.
Fig. 8(c) is the Gaussian-filtered image matrix obtained by convolving the infrared image matrix in Fig. 8(a) with the 3*3 two-dimensional filter matrix in Fig. 8(b).
Fig. 9 is a schematic diagram of obtaining a DOG image according to an embodiment of the present application.
Fig. 10(a) shows DOG images of multiple face photos according to an embodiment of the present application.
Fig. 10(b) shows DOG images of multiple live faces according to an embodiment of the present application.
Fig. 11 is a schematic flowchart of a face anti-spoofing discrimination method in the face recognition method according to an embodiment of the present application.
Fig. 12 is a schematic flowchart of another face anti-spoofing discrimination method in the face recognition method according to an embodiment of the present application.
Fig. 13 is a schematic block diagram of a convolutional neural network according to an embodiment of the present application.
Fig. 14 is a schematic block diagram of another convolutional neural network according to an embodiment of the present application.
Fig. 15 is a schematic diagram of a fully connected layer according to an embodiment of the present application.
Fig. 16 is a schematic flowchart of a face registration method in the face recognition method according to an embodiment of the present application.
Fig. 17 is a schematic flowchart of another face registration method in the face recognition method according to an embodiment of the present application.
Fig. 18 is a schematic flowchart of yet another face registration method in the face recognition method according to an embodiment of the present application.
Fig. 19 is a schematic block diagram of a face recognition apparatus according to an embodiment of the present application.
Fig. 20 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed description of embodiments
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
The embodiments of the present application are applicable to optical face recognition systems, including but not limited to products based on optical face imaging. Such a system can be applied to various electronic devices with an image acquisition apparatus (such as a camera); the electronic device may be a mobile phone, a tablet computer, a smart wearable device, a smart door lock, etc. The embodiments of the present disclosure impose no limitation on this.
It should be understood that the specific examples herein are only intended to help those skilled in the art better understand the embodiments of the present application, not to limit their scope.
It should also be understood that the formulas in the embodiments of the present application are examples rather than limitations on the scope of the embodiments; each formula can be modified, and these modifications should also fall within the scope protected by the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It should also be understood that the various embodiments described in this specification can be implemented individually or in combination; the embodiments of the present application impose no limitation on this.
Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used in the present application are only for the purpose of describing specific embodiments and are not intended to limit the scope of the present application. The term "and/or" used in the present application includes any and all combinations of one or more of the associated listed items.
For ease of understanding, the process of unlocking an electronic device through face recognition based on a 2D image is first briefly introduced with reference to Fig. 1(a), Fig. 1(b), and Fig. 1(c).
As shown in Fig. 1(a), the face recognition apparatus 10 includes an infrared emission module 110, an infrared image acquisition module 120, and a processor 130. The infrared emission module 110 is configured to emit an infrared light signal and may be an infrared light-emitting diode (LED), or another infrared light source such as a vertical cavity surface emitting laser (VCSEL); the embodiments of the present application impose no limitation on this. The infrared image acquisition module 120 may be an infrared camera including an infrared image sensor, which receives the infrared light signal and converts it into a corresponding electrical signal to generate an infrared image. The processor 130 may be a microprocessor unit (MPU) that controls the infrared emission module 110 and the infrared image acquisition module 120 to acquire face images, and performs face image recognition.
Specifically, as shown in Fig. 1(b), when face recognition is required, the 2D recognition process is as follows:
S110: acquire a 2D infrared image of the recognition target. Specifically, the infrared emission module 110 emits infrared light, which illuminates the recognition target; the recognition target may be a user's face, or a photo, a 3D model, or any other object. The infrared light reflected by the surface of the recognition target is received by the infrared image sensor of the acquisition module 120 and converted into a 2D infrared image, which is delivered to the processor 130.
S120: face detection. The 2D infrared image is received, and whether a face is present on it is detected. For example, a single convolutional neural network (CNN) is used to perform face detection on the 2D infrared image: a face detection CNN for discriminating face versus non-face is trained first; the data of the 2D infrared image is input into the face detection CNN, and after the features of the 2D infrared image data are extracted through convolution, discriminative classification is performed to judge whether a face is present on the 2D infrared image.
Specifically, as shown in Fig. 1(c), a convolutional neural network mainly consists of a convolutional layer 101, an activation layer 102, a pooling layer 103, and a fully connected layer 104. Each convolutional layer of the CNN is composed of several convolution kernels, whose parameters are optimized by a back-propagation algorithm. The purpose of the convolution operation is to extract different features of the input: different convolution kernels extract different feature maps, and a deeper convolutional network can iteratively extract more complex features from low-level features such as edges and lines. The activation layer introduces nonlinearity into the CNN through an activation function; common activation functions include Sigmoid, tanh, and ReLU. The features obtained after a convolutional layer usually have a large dimension; the pooling layer cuts the features into several regions and takes their maximum value (max pooling) or average value (average pooling) to obtain new feature maps of smaller dimension. The fully connected layer combines all local features into global features and computes the final score of each class, in order to judge the class of the input data.
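The pooling step described above can be sketched as follows; the 2x2 window size and the sample feature map are illustrative.

```python
import numpy as np

def max_pool(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """2x2 max pooling: cut the feature map into regions and keep each
    region's maximum, halving the spatial dimensions."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size          # drop ragged edges
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fm = np.array([[1., 5., 2., 0.],
               [3., 4., 1., 1.],
               [0., 0., 9., 2.],
               [7., 6., 3., 8.]])
print(max_pool(fm))
# [[5. 2.]
#  [7. 9.]]
```

Replacing `blocks.max(...)` with `blocks.mean(...)` would give average pooling instead.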
S121: if a face is present on the 2D infrared image, face cropping is performed on it. Specifically, the fully connected layer of the above face detection CNN is replaced with a convolutional layer, turning the network into a fully convolutional network. Passing the 2D infrared image through the fully convolutional network yields a feature map in which each "point" maps to the probability that the corresponding region of the original image belongs to a face; regions whose face probability exceeds a set threshold are taken as face candidate boxes. The image inside the face candidate box is cropped from the 2D infrared image to form a new 2D face infrared image.
S122: if no face is present on the 2D infrared image, a restart parameter is incremented by 1.
If no face is present on the 2D infrared image, face detection fails; in other words, the recognition target is not the user, and matching fails.
Optionally, face detection and cropping of the new 2D face infrared image can also be performed by methods such as cascaded CNNs, Dlib, or OpenCV; the embodiments of the present application impose no limitation on this.
S130: 2D face recognition. The 2D face infrared image formed in S121 is recognized to judge whether it is the user's face. For example, face recognition is performed using a convolutional neural network: a face recognition CNN for judging whether an image is the user's face is trained first, classifying according to multiple feature templates in a template library. The data of the 2D face infrared image is input into the face recognition CNN; after feature extraction through convolution, discriminative classification is performed to judge whether the 2D face infrared image matches any of the multiple feature templates in the template library.
S131: if the matching succeeds, the 2D face infrared image is the user's face image, and 2D recognition succeeds. Further, the electronic device hosting the face recognition apparatus 10, or an application on the electronic device, can be unlocked.
S132: if the matching fails, the 2D face infrared image is not the user's face image, 2D recognition fails, and the restart parameter is incremented by 1.
S140: judge whether the restart parameter is less than a first threshold.
S141: if the restart parameter is less than the first threshold, return to S110;
S142: if the restart parameter is greater than or equal to the first threshold, recognition fails.
In the flow of Fig. 1(b), the face recognition apparatus 10 acquires a 2D infrared image of a face and judges whether it matches a face feature in the feature template library, thereby unlocking the electronic device and applications (APPs) on it. Since during unlocking the apparatus 10 performs face recognition only according to the two-dimensional features of the 2D image, it cannot tell whether the acquired 2D infrared image comes from a live person's face or from a non-live face object such as a photo or video. In other words, this face recognition has no anti-spoofing capability: by stealing information such as photos or videos bearing the user's face, the electronic device and its applications can be unlocked, so the security of the face recognition apparatus and the electronic device is severely compromised.
To solve the above problem, the embodiments of the present application provide a face recognition scheme with anti-spoofing capability: the acquired infrared image is processed to obtain at least one edge feature image, face anti-spoofing is performed based on the edge feature image(s), and whether the recognition target is a live face is judged, which greatly improves the security of the face recognition apparatus and the electronic device.
In the following, the face recognition method provided by the embodiments of the present application is described in detail with reference to Figs. 2 to 18.
Fig. 2 shows a face recognition method 200 provided by an embodiment of the present application, comprising:
S210: acquiring a target image of an identification target;
S220: processing the target image to obtain at least one edge feature image;
S230: determining whether the identification target is a living human face based on the at least one edge feature image, and outputting a living-body judgment result;
S240: performing feature template matching according to the target image, and outputting a matching result;
S250: outputting a face recognition result according to the living-body judgment result and the matching result.
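As a non-authoritative sketch, the flow of steps S210 to S250 can be expressed as follows; every helper callable is a hypothetical placeholder for a module described later in this disclosure, not part of it.

```python
# All helper callables here are hypothetical placeholders for the modules
# described later in the text, not part of this disclosure.

def recognize_face(capture_image, edge_features, is_live_face, match_templates):
    image = capture_image()               # S210: acquire the target image
    edges = edge_features(image)          # S220: at least one edge feature image
    live = is_live_face(edges)            # S230: living-body judgment
    matched = match_templates(image)      # S240: feature template matching
    return live and matched               # S250: combined recognition result

# Toy usage with stub callables:
result = recognize_face(
    capture_image=lambda: "ir_frame",
    edge_features=lambda img: ["dog_0"],
    is_live_face=lambda edges: True,
    match_templates=lambda img: True,
)
# result is True only when both the living-body judgment and the
# template match succeed.
```

The point of the sketch is the combination in S250: recognition succeeds only when the anti-spoofing judgment and the template match both pass.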
It should be understood that the identification target may also be referred to as a first identification target, a second identification target, etc., to distinguish different target objects; correspondingly, the target image of the identification target may be referred to as a first target image or a second target image, a first face image or a second face image, etc. The identification target includes, but is not limited to, arbitrary objects such as a human face, a photo, a video, or a three-dimensional model. For example, the identification target may be the user's face, another person's face, a photo of the user, a curved surface with a photo pasted on it, and so on.
Optionally, the target image may be a color image generated from visible light, an infrared image generated from infrared light, or another type of image; the embodiments of the present application do not limit this. The edge feature image obtained by processing the target image is an image that embodies the edge information of the target image, for example the lines and boundaries in a color or infrared image, i.e. the regions where the color or gray-scale gradient changes strongly.
Preferably, in the embodiments of the present application, the target image is an infrared image, and the following description takes the target image as an infrared image as an example. Specifically, the infrared (Infrared Radiation, IR) image appears as a gray-scale (Gray Scale) image, in which the outer shape of the identification target is expressed by the gray levels of the image pixels.
Optionally, in the embodiments of the present application, the infrared image of the identification target may be acquired by an infrared image acquisition apparatus, which may include an infrared photoelectric sensor. The infrared photoelectric sensor includes a plurality of pixel units, each of which collects the reflected infrared signal after infrared light is reflected off the surface of the identification target and converts the reflected infrared signal into a pixel electrical signal corresponding to its light intensity. The value of each pixel electrical signal corresponds to one pixel of the infrared image, and its magnitude appears as the gray value of that pixel. Therefore, the infrared image formed by the pixel matrix of the plurality of pixel units can also be expressed as a numerical matrix composed of the gray values of the pixels. Optionally, the gray value of each pixel ranges from 0 to 255, where gray value 0 appears black and gray value 255 appears white.
Specifically, processing the infrared image to obtain at least one edge feature image means obtaining an image that embodies the edge regions of the infrared image, i.e. the regions with obvious contrast differences and large gray-scale gradients, which are generally the contour and line regions of the image. Optionally, the edge feature image includes, but is not limited to: a high-frequency feature image embodying the high-frequency information of the image, a feature image obtained by convolution with an edge detection kernel, or any other edge feature image that enhances the characteristics of edge regions.
It should be understood that the edge feature image presents different texture information for the infrared images of different identification targets and can therefore be used to distinguish living human faces from non-living faces. In other words, the edge feature image obtained by processing the infrared image of a living human face differs considerably from the edge feature image obtained by processing the infrared image of a non-living face. The embodiments of the present application may use any edge feature image that can distinguish living faces from non-living faces, which is not specifically limited here. The non-living face includes, but is not limited to: a photo of the user's face, a video of the user's face, a photo of the user's face placed on a three-dimensional curved surface, a model of the user's face, etc.
After the edge feature image of the identification target is obtained, face anti-spoofing discrimination is performed based on the edge feature image to determine whether the texture of the identification target is the texture of a living human face, so as to judge whether the identification target is a living human face and achieve the anti-spoofing effect.
Specifically, in the face recognition process, in addition to judging whether the identification target is a living human face, feature template matching also needs to be performed, and face recognition combines the feature template matching result with the living-body judgment result. Feature template matching compares the target image with the feature templates of at least one user, so as to judge whether the target image belongs to that user. Optionally, a feature template is the feature data of multiple face images or partial face images of a user under different conditions such as different angles and different environments. The feature templates are stored in the face recognition device, in particular in a memory in the device.
Combining the face anti-spoofing judgment with the feature template matching judgment enhances the reliability of the face recognition process and improves the security performance.
Currently, face anti-spoofing is divided into different security levels; as shown in Table 1 below, different levels represent different anti-spoofing requirements. For example, at anti-spoofing level 1, a 2D printed static planar face can be recognized.
Table 1
The face identification device and face identification method in Fig. 1(a) and Fig. 1(b) cannot judge whether the acquired 2D image comes from a photo or a real human face; having no anti-spoofing capability, they cannot reach level 1 of the face anti-spoofing levels in Table 1. In the embodiments of the present application, however, the texture information of the face can be obtained through the edge feature image, so that living and non-living faces can be distinguished; anti-spoofing level 5 can thus be reached, and the security of both anti-spoofing and recognition is greatly improved.
Specifically, in the embodiments of the present application, 2D feature template matching may be performed based on the acquired 2D target image of the identification target, and face recognition may be performed, and the face recognition result output, based on both the 2D feature template matching result and the face anti-spoofing judgment result.
In the embodiments of the present application, when the feature templates are 2D images, feature template matching is a main step and embodiment of 2D recognition; hereinafter, 2D recognition can also be understood as the feature template matching in 2D recognition.
Optionally, 2D recognition may be performed first, and face anti-spoofing based on the at least one edge feature image may then be performed according to the result of the 2D recognition, making the recognition process safer and more effective. For example, as shown in Fig. 3, another face recognition method 300 provided by an embodiment of the present application comprises:
S310: acquiring an infrared image of the identification target;
S340: performing 2D recognition based on the infrared image;
When the target image successfully matches any one of the plurality of feature templates, 2D recognition succeeds, indicating that the target image includes the user's face image. When the target image fails to match all of the feature templates, 2D recognition fails, indicating that the target image does not include the user's face image.
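The any-template decision rule just described can be sketched as follows; the `matches` comparison callable is a hypothetical stand-in for the actual template comparison, which this disclosure does not specify.

```python
def match_2d(target_image, templates, matches):
    """2D recognition: succeed if the target image matches ANY stored
    feature template; fail only when ALL templates fail to match.
    `matches` is a hypothetical stand-in for the real comparison."""
    return any(matches(target_image, t) for t in templates)

# Toy usage with plain equality as the stand-in matcher:
ok = match_2d("user_a", ["user_b", "user_a"], matches=lambda img, t: img == t)
# ok is True: the second template matches.
```

`any` short-circuits on the first successful template, mirroring the "any one feature template matches" condition of the text.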
Optionally, in the embodiments of the present application, the 2D recognition may be identical or similar to the 2D recognition process in Fig. 1(b).
S351: when 2D recognition succeeds, processing the infrared image to obtain at least one edge feature image;
S352: when 2D recognition fails, determining that face recognition fails and outputting a first face recognition result;
Optionally, the first face recognition result may include, but is not limited to, specific information such as 'failure' or 'non-authenticated user'.
S360: performing face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living human face;
S371: when the identification target is a living human face, determining that face recognition succeeds and outputting a second face recognition result;
Optionally, the second face recognition result may include, but is not limited to, specific information such as 'success' or 'living authenticated user'.
S372: when the identification target is not a living human face, determining that face recognition fails and outputting a third face recognition result.
Optionally, the third face recognition result may include, but is not limited to, specific information such as 'failure' or 'non-living body authentication'.
Optionally, the target image may be an infrared image, a visible light image, or another type of image.
Optionally, face anti-spoofing may be performed first, and 2D recognition may then be performed according to the anti-spoofing result; this excludes non-living faces in advance and improves recognition efficiency. For example, as shown in Fig. 4, another face recognition method 400 provided by an embodiment of the present application comprises:
S410: acquiring an infrared image of the identification target;
S420: processing the infrared image to obtain at least one edge feature image;
S430: performing face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living human face;
S441: when the identification target is a living human face, performing 2D recognition based on the infrared image;
Optionally, the 2D recognition in this step may be identical to step S340 in Fig. 3; for the specific embodiment, reference may be made to the foregoing scheme, and details are not repeated here.
S442: when the identification target is a non-living face, determining that face recognition fails and outputting a fourth face recognition result;
Optionally, the fourth face recognition result may include, but is not limited to, specific information such as 'failure' or 'non-living body'.
S471: when 2D recognition succeeds, determining that face recognition succeeds and outputting a fifth face recognition result.
Optionally, the fifth face recognition result may include, but is not limited to, specific information such as 'success' or 'living authenticated user'.
S472: when 2D recognition fails, determining that face recognition fails and outputting a sixth face recognition result.
Optionally, the sixth face recognition result may include, but is not limited to, specific information such as 'failure' or 'living non-authenticated user'.
Optionally, in one possible implementation, infrared light is transmitted to the identification target, the reflected infrared signal of the infrared light after reflection off the identification target is received, and the reflected infrared signal is converted into the infrared image. For example, an infrared light emitting module emits infrared light toward the identification target, and an image acquisition module receives the infrared signal reflected off the identification target and converts the reflected infrared signal into the infrared image.
Optionally, in steps S310 and S410, the infrared image of the identification target may be acquired by an image acquisition module, which may be the infrared image acquisition module 120 in Fig. 1(a).
Optionally, step S351 may specifically further include 3D face reconstruction. That is, when 2D recognition succeeds, 3D data of the identification target is acquired to perform 3D face reconstruction. If the 3D face reconstruction succeeds, the infrared image is processed to obtain at least one edge feature image, and face anti-spoofing discrimination is performed based on the at least one edge feature image; if the 3D face reconstruction fails, no face anti-spoofing discrimination is performed. Specifically, the reconstructed face image reflects the feature information of the face in three-dimensional space, and the face anti-spoofing discrimination is carried out on the basis of a successful 3D face reconstruction.
Optionally, as shown in Fig. 5, the face recognition method 300 further includes:
S320: face detection, specifically performing face detection based on the infrared image;
S331: when a face is present, i.e. a face is detected in the infrared image, cropping the infrared image to obtain a face infrared image;
S332: when no face is present, i.e. face detection fails, incrementing the restart parameter by 1;
S352: when 2D recognition fails, determining that face recognition fails and incrementing the restart parameter by 1;
S373: when the identification target is not a living human face, incrementing the restart parameter by 1;
Optionally, as shown in Fig. 6, the face recognition method 400 further includes:
S444: when the identification target is a non-living face, incrementing the restart parameter by 1;
when the identification target is a living human face, performing step S450: face detection, specifically performing face detection based on the infrared image;
S451: when a face is present, i.e. a face is detected in the infrared image, cropping the infrared image to obtain a face infrared image;
S452: when no face is present, i.e. face detection fails, incrementing the restart parameter by 1;
S445: 2D recognition, specifically performing 2D recognition based on the face infrared image.
S473: when 2D recognition fails, incrementing the restart parameter by 1;
Optionally, steps S320 to S332 and steps S450 to S452 may be identical to steps S120 to S122 in Fig. 1(b), and details are not repeated here.
Optionally, in the embodiments of Fig. 5 and Fig. 6, the method further includes: judging the magnitude of the restart parameter; when the restart parameter is less than a second threshold, proceeding to S310 or S410; when the restart parameter is greater than or equal to the second threshold, determining that recognition fails.
The process in S360 and S430 of performing face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living human face, i.e. the specific face anti-spoofing flow, is discussed in detail below with reference to Fig. 7 to Fig. 15.
Optionally, in the embodiments of the present application, the edge feature image may be a Difference of Gaussian (DOG) image, which can enhance and detect the edge features in a gray-scale image. It is acquired as follows: the infrared image is convolved with a plurality of Gaussian convolution kernels to obtain a plurality of Gaussian-filtered images of the infrared image, and any two different images among these Gaussian-filtered images are subtracted to obtain one DOG image.
Specifically, a Gaussian convolution kernel is a two-dimensional matrix with normal distribution characteristics; the density function of the normal distribution is the Gaussian function (Gaussian function), whose two-dimensional formula is as follows:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where (x, y) is a two-dimensional coordinate point and σ is the standard deviation of the Gaussian kernel. Changing σ changes the filtering effect of the kernel, yielding different Gaussian-filtered images.
It should be understood that the Gaussian convolution kernel may be a 3*3 matrix, a 5*5 matrix, or a matrix of another odd size, which is not limited by the embodiments of the present application.
It should also be understood that, in the embodiments of the present application, the sizes of the plurality of Gaussian convolution kernels may be the same or different, and the convolution step sizes of the plurality of Gaussian kernels may be the same or different; the embodiments of the present application do not limit this.
Preferably, in the embodiments of the present application, the plurality of Gaussian convolution kernels have the same size and the same convolution step size.
Hereinafter, the process of obtaining DOG images is illustrated with a plurality of 3*3 Gaussian convolution kernels and a convolution step size of 1. Assume the center coordinate of the 3*3 Gaussian convolution kernel is (0, 0); the coordinates of the kernel are then as shown in Fig. 7. A weight matrix is calculated from the Gaussian function formula above with a given σ value, and the 9 values in the weight matrix are normalized so that their sum equals 1; the normalized weight matrix is one Gaussian convolution kernel. Likewise, by setting different σ values, a plurality of different Gaussian convolution kernels are obtained.
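Under the assumption that the kernel is sampled directly from the Gaussian formula above at the integer coordinates of Fig. 7 and then normalized (the constant prefactor 1/(2πσ²) cancels under normalization), the construction of one 3*3 Gaussian convolution kernel can be sketched as:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Sample the 2D Gaussian at integer offsets around the centre (0, 0)
    and normalize so the size*size weights sum to 1."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    weights = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    return weights / weights.sum()

k = gaussian_kernel(3, sigma=0.8)
# k peaks at the centre element, is symmetric, and its 9 weights sum to 1.
```

Different σ values passed to this sketch yield the different kernels described in the text.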
Specifically, the convolution of the infrared image with a Gaussian convolution kernel proceeds as follows: for each pixel of the infrared image, the products of its neighborhood pixels and the corresponding elements of the filter matrix are computed and summed; the sum is the value at that pixel position, so that the Gaussian-filtered image after convolution is obtained.
For example, diagram (a) in Fig. 8 is the infrared image matrix of N*N pixels, where a1,1~aN,N are the pixel gray values of the infrared image matrix. Diagram (b) in Fig. 8 is a 3*3 Gaussian convolution kernel comprising the 9 values x1 to x9. Diagram (c) in Fig. 8 is the Gaussian-filtered image matrix obtained by convolving the infrared image matrix with the 3*3 Gaussian convolution kernel. Taking the first pixel value b1,1 as an example, the calculation formula is as follows:
b1,1 = x1·a1,1 + x2·a1,2 + x3·a1,3 + x4·a2,1 + x5·a2,2 + x6·a2,3 + x7·a3,1 + x8·a3,2 + x9·a3,3
After b1,1 is calculated, with a convolution step size of 1 the selected 3*3 region of the infrared image matrix is slid one step to the right, and the products with the values at the corresponding positions of the two-dimensional filter matrix are summed in the same way, yielding the value of b1,2.
Optionally, according to the above convolution calculation, the infrared image is convolved with a plurality of 3*3 Gaussian convolution kernels to obtain a plurality of Gaussian-filtered images; since the σ values of the plurality of 3*3 Gaussian convolution kernels differ, the resulting Gaussian-filtered images differ correspondingly. Pixel-wise subtraction of the plurality of different Gaussian-filtered images yields a plurality of DOG images. Optionally, the first Gaussian-filtered image may be subtracted in turn from each of the other Gaussian-filtered images to obtain the plurality of DOG images; alternatively, the plurality of different Gaussian-filtered images may be subtracted pairwise to obtain the plurality of DOG images, expressing the edge features of the infrared image to different degrees.
For example, as shown in Fig. 9, the infrared image F is convolved with m Gaussian convolution kernels K to obtain m Gaussian-filtered images G. The first Gaussian-filtered image G1 and the second Gaussian-filtered image G2 are subtracted to obtain the first DOG image DOG1, and so on: the i-th Gaussian-filtered image Gi and the (i+1)-th Gaussian-filtered image Gi+1 are subtracted to obtain the i-th DOG image DOGi, where 1 ≤ i ≤ m−1. The convolution with a Gaussian convolution kernel realizes Gaussian filtering of the infrared image, which smooths and blurs the image; however, corner points with obvious contrast cannot be smoothed well and are therefore retained. Since different Gaussian convolution kernels smooth to different degrees, in the DOG image obtained by subtracting two differently filtered images, the pixel values at corner and edge regions of the original infrared image where the contrast changes strongly are large and appear distinctly, while the pixel values in large areas with small contrast change are small and appear weakly.
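A minimal sketch of the filtering-and-subtraction pipeline just described, in plain NumPy with 'valid' borders and step size 1; the σ values and the 8*8 toy image are illustrative assumptions, not taken from this disclosure:

```python
import numpy as np

def gaussian_kernel(sigma, size=3):
    """Sample a normalized size*size Gaussian kernel for a given sigma."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    return g / g.sum()

def conv2d_valid(img, kernel):
    """Slide the kernel with step size 1 and sum the element-wise
    products (the calculation of Fig. 8, 'valid' borders)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dog_images(ir_image, sigmas):
    """Filter with several Gaussian kernels, then subtract adjacent
    filtered images: DOG_i = G_i - G_(i+1)."""
    blurred = [conv2d_valid(ir_image, gaussian_kernel(s)) for s in sigmas]
    return [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]

rng = np.random.default_rng(0)
ir = rng.integers(0, 256, size=(8, 8)).astype(float)  # toy 8*8 gray image
dogs = dog_images(ir, sigmas=[0.6, 1.0, 1.6])  # two 6*6 DOG images
```

Because the kernels are normalized, a region of uniform gray subtracts to (near) zero in the DOG image, matching the observation above that low-contrast areas appear weakly.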
In the embodiments of the present application, consider the case where the identification target is a face photo or a living human face: a face photo is a two-dimensional planar structure while a living human face is a three-dimensional structure. Comparing the photo infrared image obtained by imaging the face photo with the face infrared image obtained by imaging the living face, the facial contour textures in the face infrared image are few and clear, appearing as gray-scale-change edge regions, and the skin regions are large in area and balanced in gray scale; in the photo infrared image, due to environmental interference and other influences, the gray-scale textures are more numerous and blurred. For example, diagram (a) in Fig. 10 shows the DOG images of a plurality of face photos, and diagram (b) in Fig. 10 shows the DOG images of a plurality of living human faces. Comparing the two, it can be seen that the DOG images of the photo faces in diagram (a) contain more cluttered edge textures, making the facial texture contours unclear, whereas in the DOG images of the living faces in diagram (b) the facial texture contours are clear and the interfering textures are few.
Optionally, in one possible implementation, after the at least one edge feature image is preprocessed, classification processing is performed on the at least one preprocessed edge feature image using a deep learning network to determine whether the identification target is a living human face. In the embodiments of the present application, the deep learning network includes, but is not limited to, a convolutional neural network; other deep learning networks may also be used, and the embodiments of the present application do not limit this. The classification processing method in the embodiments of the present application is illustrated below taking a convolutional neural network as an example.
For example, as shown in Fig. 11, the face anti-spoofing discrimination method 500 includes:
S510: reducing the at least one edge feature image to obtain at least one target edge feature image;
S520: performing classification processing on the at least one target edge feature image through a convolutional neural network to determine whether the identification target is a living human face.
Optionally, when the edge feature image is a DOG image, as shown in Fig. 12, the face anti-spoofing discrimination method 501 includes:
S511: reducing the at least one DOG image to obtain at least one target DOG image;
S521: performing classification processing on the at least one target DOG image through a convolutional neural network to determine whether the identification target is a living human face.
Specifically, image scaling (resize) is performed on the at least one DOG image to obtain the at least one target DOG image. Scaling means enlarging or reducing the DOG image to a target size. In S511 of the embodiments of the present application, the DOG image is reduced to obtain the target DOG image, which speeds up data processing and shortens the response time of the face recognition process.
Optionally, the DOG image is scaled using methods such as nearest-neighbor interpolation or bilinear interpolation. For example: the original DOG image size is M*M, the reduced target image size is A*A, and the side ratio of the two images is M/A. The (i, j)-th pixel of the target image corresponds to the coordinate (i*M/A, j*M/A) in the original DOG image, and this coordinate is usually not an integer.
When the reduction is performed by nearest-neighbor interpolation, the computed coordinate (i*M/A, j*M/A) in the original DOG image is simply rounded to the coordinate of the nearest point; for example, (0.75, 0.25) rounds to coordinate (1, 0), and the pixel value at that coordinate is the value of the corresponding pixel of the target image.
When the reduction is performed by bilinear interpolation, assume that a pixel (i, j) of the target DOG image maps to the coordinate (i+u, j+v) in the original DOG image, where u and v are the fractional parts. The pixel value of the pixel (i, j) of the target image is then the interpolated value at coordinate (i+u, j+v) in the original DOG image, as follows:
f(i+u, j+v) = (1−u)(1−v)·f(i, j) + (1−u)v·f(i, j+1) + u(1−v)·f(i+1, j) + uv·f(i+1, j+1)
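The bilinear formula above can be sketched as follows in plain NumPy; the clipping at the last row and column is an implementation choice of this sketch, not specified in the disclosure:

```python
import numpy as np

def bilinear_resize(src, out_size):
    """Reduce a square image to out_size*out_size: target pixel (i, j)
    maps back to (i*M/A, j*M/A) in the source, and the value is the
    bilinear mix of the four surrounding source pixels."""
    m, a = src.shape[0], out_size
    out = np.empty((a, a))
    for i in range(a):
        for j in range(a):
            y, x = i * m / a, j * m / a
            i0 = min(int(y), m - 2)   # clip so (i0+1, j0+1) stays in range
            j0 = min(int(x), m - 2)
            u, v = y - i0, x - j0     # fractional parts
            out[i, j] = ((1 - u) * (1 - v) * src[i0, j0]
                         + (1 - u) * v * src[i0, j0 + 1]
                         + u * (1 - v) * src[i0 + 1, j0]
                         + u * v * src[i0 + 1, j0 + 1])
    return out

small = bilinear_resize(np.arange(16.0).reshape(4, 4), 2)
# small[0, 0] equals the source pixel (0, 0), since that target pixel
# maps exactly onto the source grid.
```

When the mapped coordinate is an integer, u = v = 0 and the formula degenerates to a plain copy, as in the nearest-neighbor case.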
Optionally, after the at least one DOG image is proportionally reduced to at least one target DOG image by the above nearest-neighbor or bilinear interpolation, the at least one target DOG image is input into the convolutional neural network for classification processing. When a plurality of target DOG images are input into the convolutional neural network for classification processing, the convolutional neural network extracts and classifies the features of the plurality of target DOG images, which can improve the accuracy of identification and the security performance.
First, the convolutional neural network structure is built. For example, a structure of two or more convolutional layers may be used, and the structure of each convolutional layer may be adjusted according to the face information to be extracted; the embodiments of the present application do not limit this.
Second, the initial training parameters and the convergence conditions of the convolutional neural network are set.
Optionally, in the embodiments of the present application, the initial training parameters may be randomly generated, obtained from empirical values, or taken from a convolutional neural network model pre-trained on a large amount of genuine and fake face data; the embodiments of the present application do not limit this.
Then, the DOG images of a large number of living faces and non-living faces of users are input into the convolutional neural network, which processes them based on the initial training parameters and determines a judgment result for each DOG image. Further, according to the judgment results, the structure of the convolutional neural network and/or the training parameters of each layer are adjusted until the judgment results meet the convergence conditions.
Optionally, in the embodiments of the present application, the convergence conditions may include at least one of the following:
1. the probability that a DOG image of a living face is judged as a DOG image of a living face is greater than a first probability, for example 98%;
2. the probability that a DOG image of a non-living face is judged as a DOG image of a non-living face is greater than a second probability, for example 95%;
3. the probability that a DOG image of a living face is judged as a DOG image of a non-living face is less than a third probability, for example 2%;
4. the probability that a DOG image of a non-living face is judged as a DOG image of a living face is less than a fourth probability, for example 3%.
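A minimal sketch of checking the four convergence conditions on a set of held-out judgments; the example thresholds (98%, 95%, 2%, 3%) are the ones given above, while the function shape and the use of boolean verdicts are assumptions of this sketch:

```python
def converged(live_preds, fake_preds,
              p1=0.98, p2=0.95, p3=0.02, p4=0.03):
    """Check the four convergence conditions on held-out judgments.
    live_preds / fake_preds hold the network's 'is a living face'
    verdicts (True/False) for DOG images of living and non-living
    faces respectively; p1..p4 are the example thresholds above."""
    live_as_live = sum(live_preds) / len(live_preds)
    fake_as_fake = sum(not p for p in fake_preds) / len(fake_preds)
    return (live_as_live > p1             # condition 1
            and fake_as_fake > p2         # condition 2
            and (1 - live_as_live) < p3   # condition 3
            and (1 - fake_as_fake) < p4)  # condition 4

# 99/100 living faces accepted, 98/100 non-living faces rejected:
ok = converged([True] * 99 + [False], [False] * 98 + [True] * 2)
```

Note that conditions 1/3 and 2/4 are complements of each other; the text lists them separately because the thresholds may be set independently.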
After the training of the convolutional neural network that judges whether an image shows a living face is completed, in the face recognition process the DOG image obtained by processing the current identification target is input into the convolutional neural network, so that the network can process the DOG image of the identification target using the trained parameters and determine whether the identification target is a living human face.
Optionally, in one possible implementation, as shown in Fig. 13, the convolutional neural network 50 includes a convolutional layer 510, an excitation layer 520, and a fully connected layer 530. The convolutional layer 510 performs at least one convolution calculation on the at least one input target DOG image to extract features from the at least one target DOG image.
The convolutional layer 510 includes a plurality of different convolution kernels. The matrix obtained by sliding a convolution kernel over the image and computing dot products is called a convolved feature map, also referred to as an activation map or feature map. For the same input image, convolution kernels with different values produce different feature maps; by modifying the values of a convolution kernel, different features can be detected in the image.
Optionally, the convolution calculation of one target DOG image with one convolution kernel may be identical to the convolution calculation process described above with reference to Fig. 8, sliding one pixel value at a time (step size 1) over the target DOG image and computing with the convolution kernel; alternatively, x pixel values may be slid each time (step size x), and the embodiments of the present application do not limit this.
Optionally, when Z target DOG images are convolved, a convolutional layer includes n groups of convolution kernels, each group containing Z convolution kernels; the Z kernels are convolved with the Z target DOG images respectively and the results are summed, giving the feature map computed by that group of kernels. The n groups of convolution kernels therefore output n feature maps.
Optionally, in the embodiments of the present application, the convolution kernel may be a matrix of 3*3, 5*5, 7*7, or another size; the embodiments of the present application do not limit this either.
In a preferred embodiment, as shown in figure 14, the convolutional layer 510 includes the first convolutional layer 511 and the Two convolutional layers 512, wherein the first convolutional layer 511 and the second convolutional layer 512 are all made of the convolution kernel of 3*3, the first convolutional layer 511 Step-length be 1, the step-length of the second convolutional layer 512 is 2, and target DOG is schemed to obtain n and open not after the convolution kernel different from n group calculate Same characteristic pattern, wherein n different characteristic patterns are extracted the different characteristic informations at least one target DOG figure respectively, The n different characteristic patterns are also referred to as n channel of convolutional layer output.
It include excitation function in excitation layer 520, each pixel value in the characteristic pattern for obtaining to convolution carries out non-thread Propertyization processing.Optionally, excitation function includes but is not limited to correct linear unit (Rectified Linear Unit, ReLU) Several variant shapes of function, index linear unit (exponential linear unit, ELU) function and ReLU function Formula, such as: band leakage amendment linear unit (Leaky ReLU, LReLU), parametrization amendment linear unit (Parametric ReLU, PReLU), it corrects at random linear unit (Randomized ReLU, RReLU) etc..
Preferably, in the embodiment of the present application, the excitation function used is the parametric rectified linear unit PReLU function. Specifically, the formula of the PReLU function is as follows:

f(x_i) = x_i, if x_i > 0; f(x_i) = a_i * x_i, if x_i <= 0

wherein i denotes the i-th channel, x_i denotes an input value of the i-th channel, and a_i denotes the parameter of the i-th channel; the parameters of different channels may be the same or different.
In the embodiment of the present application, 0 < i ≤ n, and the n channels, that is, the n feature maps, are respectively activated by the above PReLU function. In the i-th feature map after PReLU processing, each pixel value less than or equal to 0 becomes a_i*x_i, while each pixel value greater than 0 remains unchanged, so that the pixel values in the feature map have sparsity; after the sparsification realized by PReLU, the neural network structure can better mine relevant features and fit the training data. Specifically, after the n different feature maps undergo nonlinear processing by the PReLU function, n sparse feature maps are obtained.
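The per-channel PReLU activation above can be sketched in a few lines of NumPy; the parameter value a_i = 0.1 is an arbitrary assumption (in practice it is learned during training):

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: positive values pass unchanged, values <= 0 are
    scaled by the per-channel learnable parameter a_i."""
    return np.where(x > 0, x, a * x)

fmap = np.array([[-2.0, 0.0], [1.5, -0.5]])   # toy feature map for one channel
sparse = prelu(fmap, a=0.1)                   # a_i assumed to be 0.1 here
print(sparse)
```

Negative entries shrink toward zero (here -2.0 becomes -0.2) while positive entries are preserved, which is the sparsifying effect described above.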
Preferably, as shown in figure 14, in the embodiment of the present application, the excitation layer 520 includes a first excitation layer 521 and a second excitation layer 522. The first excitation layer 521 follows the first convolutional layer 511 and performs nonlinear processing on the feature maps output by the first convolutional layer 511 to obtain multiple first sparse feature maps; the second convolutional layer 512 and the second excitation layer 522 then perform convolution and nonlinear processing again on the multiple first sparse feature maps, and the result is input to the fully connected layer 530 for full connection and classification processing.
Specifically, each node in the fully connected layer 530 is connected with each node in the previous layer, and is used to integrate the features extracted by the preceding neural network, playing the role of a "classifier" in the entire convolutional neural network. For example, as shown in Figure 15, f1~fn are the nodes output by the previous layer, and the fully connected layer 530 includes m fully connected nodes in total, outputting m constants or matrices y, so that the m constants or matrices can be fully connected again, classified, or used for classification judgment. Specifically, each of the m fully connected nodes includes multiple parameters obtained from the above training convergence, which are used to perform weighted connection on f1~fn, finally obtaining one constant or matrix result y.
In the following, taking f1~fn as the n sparse feature maps as an example, the full connection process of the fully connected layer in the embodiment of the present application is illustrated.
The size of the n sparse feature maps f1~fn is A*A, and each fully connected node includes n convolution kernels of size A*A; therefore, the m fully connected nodes include m*n convolution kernels of size A*A in total. For each fully connected node, the n convolution kernels of size A*A are multiplied elementwise with the n sparse feature maps and the products are summed to obtain one constant. Therefore, m constants are obtained for the m fully connected nodes.
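This multiply-and-sum full connection can be sketched as below; the sizes n=3, m=2, A=4 and the random weights are illustrative assumptions, not the trained parameters:

```python
import numpy as np

def fully_connect(sparse_maps, nodes):
    """Each of the m nodes holds n weight matrices of size A*A; multiply
    them elementwise with the n sparse feature maps and sum everything
    into one constant per node."""
    return [sum(np.sum(w * f) for w, f in zip(node, sparse_maps))
            for node in nodes]

n, m, A = 3, 2, 4                             # illustrative sizes
rng = np.random.default_rng(1)
feats = [rng.standard_normal((A, A)) for _ in range(n)]
nodes = [[rng.standard_normal((A, A)) for _ in range(n)] for _ in range(m)]
y = fully_connect(feats, nodes)
print(len(y))                                 # m = 2 constants
```

Each output in y is a single scalar, matching the statement that m constants are obtained for the m fully connected nodes.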
Optionally, when the convolution kernels in the m fully connected nodes are smaller than A*A, m matrices are obtained for the m fully connected nodes.
Optionally, the fully connected layer 530 further includes a classification function Sigmoid, which performs classification judgment on the constants output by the fully connected layer.
Wherein, the formula of the Sigmoid function is as follows:

S(x) = 1 / (1 + e^(-x))

In the Sigmoid function, when the input tends to positive or negative infinity, the function approaches a smooth, saturated state. Because its output range is 0 to 1, the Sigmoid function is usually used for the probability of two-class classification. The Sigmoid function processes the constants output by the fully connected layer to obtain a probability value for judgment, thereby obtaining the final face anti-spoofing discrimination result to determine whether the identification target is a living body face.
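A minimal sketch of this final classification step; the 0.5 decision threshold is an assumption (the patent only says the output is read as a two-class probability):

```python
import math

def sigmoid(x):
    """Squash the fully connected output into (0, 1), read as the
    probability that the identification target is a living body face."""
    return 1.0 / (1.0 + math.exp(-x))

score = sigmoid(2.0)       # e.g. a fully connected output of 2.0
is_living = score > 0.5    # two-class decision threshold (assumed 0.5)
print(is_living)           # True
```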
It should be understood that, in the embodiment of the present application, the convolutional neural network 50 may also include a structure of at least one convolutional layer, at least one excitation layer and/or at least one fully connected layer, such as: a convolutional layer-excitation layer-fully connected layer structure, or a convolutional layer-excitation layer-fully connected layer-fully connected layer structure. The embodiment of the present application does not limit this.
It should also be understood that the convolution parameters in multiple convolutional layers may be different, the excitation functions used by multiple excitation layers may be different, and the full connection parameters in multiple fully connected layers may also be different. The embodiment of the present application also does not limit this.
In the above application embodiment, face anti-spoofing discrimination is carried out based on the edge feature image to determine whether the identification target is a living body face, wherein the result of the face anti-spoofing discrimination is used for face recognition.
Optionally, the result of the face anti-spoofing discrimination can also be used for face registration, i.e., for generating the face feature template in the 2D face recognition process. Specifically, adding face anti-spoofing during face registration prevents a photo of a human face, or a photo collected from another non-living-body face model, from being used as a template for face recognition matching, which can improve the accuracy of 2D identification.
Specifically, as shown in figure 16, the face registration method 700 includes:
S710: obtaining the target image of the identification target;
S720: processing the target image to obtain at least one edge feature image;
S730: carrying out face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living body face, wherein the result of the face anti-spoofing discrimination is used for establishing a face feature template.
It should be understood that, in the embodiment of the present application, the face registration method process and the above face identification method process are two independent stages; the face feature template established during the registration method is only used for the 2D identification judgment in the face recognition process. After the face feature template is established by the face registration method, face recognition is carried out by the above face identification method and face anti-spoofing discrimination method.
It should also be understood that the identification target in the embodiment of the present application may be the same as or different from the identification target in the above face recognition process. For example, both may be the living body face of the user, which is registered and then identified; or the identification target during registration is the living body face of the user, while the identification target during identification is another, non-living-body face. The embodiment of the present application does not limit this.
Optionally, the step S710 may be identical to the above step S210, obtaining the target image of the identification target by an image acquisition device. Optionally, the target image is an infrared image or a visible light color image.
Optionally, the step S720 may be identical to the above step S220. Optionally, the edge feature image is an image embodying the edge information in the image, such as a DOG image. Specifically, for the method of processing the target image to obtain the corresponding DOG image, reference may be made to the description of the above application embodiment, and details are not described herein again.
Optionally, in the step S730, face anti-spoofing discrimination is carried out based on the at least one edge feature image to determine whether the identification target is a living body face; the above face recognition anti-spoofing discrimination method 500 or face recognition anti-spoofing discrimination method 501 may be used for the discrimination. For specific descriptions, reference may be made to the above application embodiment, and details are not described herein again.
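The DOG (difference of Gaussians) processing referenced in S720 can be sketched as below: two low-frequency (Gaussian-blurred) versions of the image are subtracted, leaving the band-pass edge information. The sigmas and kernel size are illustrative assumptions; edge-replicating padding is used to avoid spurious border responses:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian low-pass convolution kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(image, kernel):
    """Same-size convolution with edge-replicating padding."""
    r = kernel.shape[0] // 2
    padded = np.pad(image, r, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kernel.shape[0],
                                      x:x + kernel.shape[1]] * kernel)
    return out

def dog_image(image, sigma1=1.0, sigma2=2.0, size=5):
    """Difference of Gaussians: subtracting two low-frequency versions
    of the image keeps the band-pass (edge) information."""
    return (blur(image, gaussian_kernel(size, sigma1))
            - blur(image, gaussian_kernel(size, sigma2)))

img = np.zeros((16, 16))
img[:, 8:] = 1.0                 # vertical step edge at column 8
edge = dog_image(img)            # response concentrates around the step
print(edge.shape)                # (16, 16)
```

Flat regions of the input map to zero in the DOG image, while columns around the step carry the edge response.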
Optionally, in the embodiment of the present application, the face registration method further includes: establishing a face feature template according to the target image.
In one possible embodiment, when the target image is an infrared image, the infrared image of the identification target is first obtained, template matching is carried out based on the infrared image, and anti-spoofing is carried out on the basis of a successful match.
For example, Figure 17 shows a face registration method 800, comprising:
S810: obtaining the infrared image of the identification target;
S850: carrying out template matching based on the infrared image;
S851: when the template matching succeeds, processing the infrared image to obtain at least one edge feature image;
S852: when the template matching fails, not establishing a face feature template;
S860: carrying out face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living body face;
S871: when the identification target is a living body face, storing the infrared image as a face feature template;
S872: when the identification target is not a living body face, not storing the infrared image as a face feature template.
Wherein, optionally, step S810 may be identical to step S310, step S851 may be identical to step S351, and step S860 may be identical to step S360.
Optionally, step S850 may be similar to the 2D identification carried out based on the infrared image in step S340: the infrared image is matched with the multiple face feature templates in the face feature template library; if the match succeeds, the face infrared image is the facial image of the user, and if the match fails, the face infrared image is not the facial image of the user.
Optionally, in step S871, when the identification target is a living body face, the data of the infrared image is stored in a storage unit as a new face feature template in the face feature template library. The storage unit may be a storage unit in the processor that executes the face registration method, or a memory in the electronic equipment that executes the face registration method.
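The template matching of S850 can be sketched as a normalized-correlation comparison against the template library. This is an illustrative stand-in: the patent does not specify the similarity measure, and the 0.9 threshold is an assumption:

```python
import numpy as np

def normalize(img):
    """Flatten and standardize an image for correlation comparison."""
    v = img.ravel().astype(float)
    return (v - v.mean()) / (v.std() + 1e-8)

def match_templates(face_img, template_lib, threshold=0.9):
    """2D identification sketch: the face infrared image matches if its
    normalized correlation with any stored face feature template passes
    the (assumed) threshold."""
    v = normalize(face_img)
    return any(np.dot(v, normalize(t)) / v.size >= threshold
               for t in template_lib)

rng = np.random.default_rng(2)
user = rng.standard_normal((8, 8))
library = [rng.standard_normal((8, 8)),                 # someone else
           user + 0.01 * rng.standard_normal((8, 8))]   # stored user template
print(match_templates(user, library))      # True: a matching template exists
print(match_templates(user, library[:1]))  # False: no matching template
```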
Optionally, as shown in figure 18, face registration method 800 can also include:
S820: face detection;
S821: when a face is detected in the infrared image, face shearing is carried out on the infrared image to obtain a face infrared image;
S822: when no face is detected in the infrared image, the restart parameter is incremented by 1.
Optionally, steps S820 to S822 may be identical to steps S320 to S332.
S830: 3D face reconstruction;
Specifically, by emitting structured light or light pulses, the reflected structured light or reflected light pulses carrying the surface information of the identification target can be received after reflection from the surface of the identification target, so as to obtain 3D data of the identification target. The 3D data contains the depth information of the identification target and can represent the surface shape of the identification target. The 3D data can be expressed in a variety of different forms, such as a depth map (Depth Image), a 3D point cloud (Point Cloud), or a geometrical model. In the embodiment of the present application, 3D face reconstruction can be carried out according to the 3D data, i.e., a 3D morphological image representing the identification target is obtained.
S831: when the 3D face reconstruction succeeds, i.e., when the 3D morphological image of the identification target is acquired according to the 3D data, proceed to S840.
Optionally, when the 3D face reconstruction succeeds, the 3D data is stored into the storage unit; for example, the 3D point cloud data is stored into the storage unit as a 3D point cloud data template, forming a 3D point cloud data template library.
S832: when the 3D face reconstruction fails, i.e., when the 3D morphological image of the identification target cannot be acquired according to the 3D data, the restart parameter is incremented by 1.
S840: judging whether the face infrared image sheared in step S821 belongs to the face feature template library. Optionally, the user identity (Identification, ID) information of the infrared image is obtained to judge whether a face feature template library of that user ID exists. When the face feature template library of the user ID exists, proceed to S842: the face infrared image belongs to the face feature template library. When the face feature template library of the user ID does not exist, proceed to S841: the face infrared image does not belong to the face feature template library.
S8411: when the face infrared image does not belong to the face feature template library, the infrared image is processed to obtain an edge feature image, and the method proceeds to step S860.
Optionally, a new user face feature template library can also be established according to the user ID information of the obtained infrared image.
S8501: when the face infrared image belongs to the face feature template library, template matching is carried out based on the face infrared image sheared in step S821. The specific matching process may be identical to step S850.
S851: when the template matching succeeds, the infrared image is processed to obtain an edge feature image, and the method proceeds to step S860.
S852: when the template matching fails, no face feature template is established, and the restart parameter is incremented by 1.
S860: carrying out face anti-spoofing discrimination based on the at least one edge feature image to determine whether the identification target is a living body face.
S8711: when the identification target is a living body face, proceed to S8712: judging whether the point cloud is a valid point cloud.
Optionally, the 3D point cloud data collected by the face reconstruction in S830 is matched with the multiple 3D point cloud data templates in the 3D point cloud data template library to judge whether it is a valid point cloud: when the match succeeds, it is an invalid point cloud; when the match fails, it is a valid point cloud. Specifically, the point cloud matching is used to judge whether the facial angle of the identification target in the collected 3D point cloud data is the same as the facial angle in a 3D point cloud data template. When the angles are the same, the match succeeds, meaning 3D point cloud data of the same facial angle already exists in the template library, and the cloud is an invalid point cloud; when the angles are different, the match fails, meaning no 3D point cloud data of the same facial angle exists in the template library, and the cloud is a valid point cloud.
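The validity rule above (a matched facial angle means the pose is already registered, so the cloud is invalid) can be sketched as follows; representing each template by a single facial angle and the 5-degree tolerance are both illustrative assumptions:

```python
def is_valid_point_cloud(new_angle, template_angles, tol=5.0):
    """A captured cloud whose facial angle matches a stored template is
    invalid (that pose is already registered); an unmatched angle is a
    valid cloud worth storing. The tolerance value is an assumption."""
    return all(abs(new_angle - a) > tol for a in template_angles)

stored_angles = [0.0, 30.0, -30.0]   # poses already in the template library
print(is_valid_point_cloud(1.0, stored_angles))    # False: frontal pose exists
print(is_valid_point_cloud(60.0, stored_angles))   # True: new profile pose
```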
Optionally, the 3D point cloud data of the identification target can also be collected multiple times, with point cloud fusion carried out in the process, so as to form 3D data and a 3D image covering the full range of facial angles; 3D face recognition can then be carried out according to that 3D image.
S8713: when the 3D point cloud data is judged to be a valid point cloud, the face infrared image is stored as a face feature template. Specifically, the data of the face infrared image is stored in the storage unit as a new face feature template in the face feature template library.
S8714: when the 3D point cloud data is judged to be an invalid point cloud, the restart parameter is incremented by 1.
Optionally, after the 3D point cloud data is judged to be a valid point cloud, it can also be judged whether the face feature templates in the face feature template library are full.
Specifically, it is judged whether the number of face feature templates in the face feature template library is equal to a preset value; if it is equal to the preset value, the face feature templates are full, and no new face feature template is stored.
For example, if the preset value is 8, then when the number of face feature templates in the face feature template library reaches 8, no new face feature template is added.
When the face feature templates are not full, the face infrared image is stored as a face feature template. Specifically, the data of the face infrared image is stored in the storage unit as a new face feature template in the face feature template library.
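The capacity check can be sketched as a simple guard on the template library (the preset value 8 comes from the example above; the template representation here is a placeholder):

```python
MAX_TEMPLATES = 8   # the preset value used in the example above

def try_store_template(template_lib, face_img):
    """Append the face infrared image as a new face feature template
    only while the library holds fewer than the preset number."""
    if len(template_lib) >= MAX_TEMPLATES:
        return False            # library full: nothing stored
    template_lib.append(face_img)
    return True

lib = [f"template_{i}" for i in range(7)]
print(try_store_template(lib, "new_face"))    # True: 7 -> 8, stored
print(try_store_template(lib, "extra_face"))  # False: already full
```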
Optionally, the face registration method 800 further includes:
judging whether the restart parameter is less than a second threshold. If the restart parameter is less than the second threshold, the method proceeds to S810; if the restart parameter is greater than or equal to the second threshold, the recognition fails.
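This retry limit can be sketched as below; the threshold value 5 is an assumption, since the embodiment does not fix the second threshold:

```python
SECOND_THRESHOLD = 5   # assumed value; the embodiment leaves it unspecified

def next_action(restart_count):
    """Return 'retry' to re-enter S810 while the restart parameter is
    below the second threshold, otherwise 'failed'."""
    return "retry" if restart_count < SECOND_THRESHOLD else "failed"

print(next_action(2))   # retry
print(next_action(5))   # failed
```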
Above, in conjunction with Fig. 2 to Figure 18, the face identification method embodiments of the application have been described in detail. Below, in conjunction with Figure 19, the face identification device embodiments of the application are described in detail. It should be appreciated that the device embodiments correspond to the method embodiments, and similar descriptions may refer to the method embodiments.
Figure 19 is the schematic block diagram according to the face identification device 20 of the embodiment of the present application, comprising:
an image acquisition module 210, for obtaining a first target image of a first identification target;
a processor 220, for carrying out face anti-spoofing discrimination based on at least one first edge feature image to determine whether the identification target is a living body face, wherein the result of the face anti-spoofing discrimination is used for face recognition.
Optionally, the first target image is a two-dimensional infrared image.
Optionally, the image acquisition module 210 may be any device that acquires images, such as a camera. Optionally, in the embodiment of the present application, the image acquisition module may be an infrared camera, for acquiring an infrared depth image. Optionally, the image acquisition module 210 includes a filter 211 and a light detection array 212; the filter 211 is used to pass the optical signal of the target wavelength and filter out optical signals of non-target wavelengths, and the light detection array 212 performs light detection based on the target wavelength and converts the detected optical signal into an electrical signal. Optionally, the light detection array 212 includes multiple pixel units, one pixel unit being used to transmit a photocurrent signal forming one pixel value in the image of the identification target. Optionally, the pixel unit may use devices such as a photodiode (photo diode) or a metal oxide semiconductor field effect transistor (Metal Oxide Semiconductor Field Effect Transistor, MOSFET). Optionally, the pixel unit has higher luminous sensitivity and higher quantum efficiency for light of the target wavelength, in order to detect the optical signal of the corresponding wavelength.
Specifically, in the embodiment of the present application, the target wavelength belongs to the infrared band. For example, if the target wavelength is 940nm, the filter 211 is used to pass the 940nm infrared signal and to block visible light and other infrared light of non-940nm wavelengths from passing through; the light detection array 212 is then an infrared light detection array, which detects the 940nm infrared light and forms the corresponding depth image of the identification target.
Optionally, the processor 220 may be the processor of the face identification device 20, or the processor of the electronic equipment including the face identification device 20; the embodiment of the present application does not limit this.
Optionally, the processor 220 is also used to: carry out two-dimensional identification based on the first target image;
when the two-dimensional identification succeeds, the processor 220 is specifically used to: process the first target image to obtain the at least one first edge image;
the processor 220 is also used to: when the first identification target is a living body face, determine that the face recognition succeeds;
alternatively, when the first identification target is a non-living-body face, determine that the face recognition fails.
Optionally, the processor 220 is also used to: when the first identification target is a living body face, carry out two-dimensional identification based on the first target image;
when the two-dimensional identification succeeds, determine that the face recognition succeeds, or, when the two-dimensional identification fails, determine that the face recognition fails;
alternatively, when the first identification target is a non-living-body face, determine that the face recognition fails.
Optionally, the processor 220 is specifically used to: shear the first target image to obtain a first facial image;
match the first facial image with multiple feature templates; when the match succeeds, the two-dimensional identification succeeds, or, when the match fails, the two-dimensional identification fails.
Optionally, the processor 220 is specifically used to: carry out convolutional calculation on the first target image using multiple low pass convolution kernels to obtain multiple first low-frequency feature images;
subtract two different first low-frequency feature images among the multiple first low-frequency feature images to obtain one first edge feature image of the at least one first edge feature image.
Optionally, the low pass convolution kernel is a Gaussian convolution kernel, and the first edge feature image is a difference of Gaussians DOG image.
Optionally, the processor 220 is specifically used to: reduce the at least one first edge feature image to obtain at least one first target edge feature image, and carry out face anti-spoofing discrimination based on the at least one first target edge feature image.
Optionally, the processor 220 is specifically used to: perform classification processing on the at least one first target edge feature image by a convolutional neural network to determine whether the first identification target is a living body face.
Optionally, the convolutional neural network includes: at least one convolutional layer, at least one excitation layer and at least one fully connected layer.
Optionally, the processor 220 is specifically used to: carry out convolutional calculation on the at least one first target edge feature image by the at least one convolutional layer to obtain multiple feature maps;
carry out nonlinear processing on the multiple feature maps by the at least one excitation layer to obtain multiple sparse feature maps;
fully connect the multiple sparse feature maps by the at least one fully connected layer to obtain multiple feature constants; and carry out classification processing on the multiple feature constants using a classification function.
Optionally, the number of the at least one convolutional layer and of the at least one excitation layer is 2, and the number of the at least one fully connected layer is 1.
Optionally, the at least one convolutional layer includes a first convolutional layer and a second convolutional layer; the convolution step length of the first convolutional layer is 1, and the convolution step length of the second convolutional layer is 2.
Optionally, the convolution kernel size in the at least one convolutional layer is a 3*3 matrix, and/or the excitation function in the at least one excitation layer is the parametric rectified linear unit PReLU function, and/or the classification function in the at least one fully connected layer is the Sigmoid function.
Optionally, the processor 220 is also used to:
obtain a second target image of a second identification target;
process the second target image to obtain at least one second edge feature image;
carry out face anti-spoofing discrimination based on the at least one second edge feature image to determine whether the second identification target is a living body face, wherein the result of the face anti-spoofing discrimination is used for establishing a face feature template.
Optionally, the second target image is a second infrared image.
Optionally, the processor 220 is also used to: establish the face feature template based on the second target image.
Optionally, the processor 220 is also used to: carry out face detection based on the second target image;
wherein establishing the face feature template based on the second target image includes:
when the face detection succeeds, carrying out facial image shearing on the second target image to form a second facial image, and establishing the face feature template based on the second facial image.
Optionally, the processor 220 is specifically used to: judge whether the second facial image belongs to a face feature template library;
when the second facial image belongs to the face feature template library, match the second facial image with the multiple face feature templates in the face feature template library;
when the second facial image does not belong to the face feature template library, carry out face anti-spoofing discrimination based on the at least one second edge feature image, and, when the second identification target is determined to be a living body face, establish the second facial image as a face feature template.
Optionally, when the match succeeds, the processor 220 is specifically used to: carry out face anti-spoofing discrimination based on the at least one second edge feature image;
when the second identification target is determined to be a living body face, establish the second facial image as a face feature template.
Optionally, when the match succeeds, the processor 220 is specifically used to: obtain the 3D point cloud data of the second identification target;
when the 3D point cloud data is a valid point cloud, carry out face anti-spoofing discrimination based on the at least one second edge feature image.
The processor 220 is specifically used to: carry out convolutional calculation on the second target image using multiple low pass convolution kernels to obtain multiple second low-frequency feature images;
subtract two different second low-frequency feature images among the multiple second low-frequency feature images to obtain one second edge feature image of the at least one second edge feature image.
Optionally, the low pass convolution kernel is a Gaussian convolution kernel.
The processor 220 is specifically used to: reduce the at least one second edge feature image to obtain at least one second target edge feature image, and carry out face anti-spoofing discrimination based on the at least one second target edge feature image.
The processor 220 is specifically used to: perform classification processing on the at least one second target edge feature image by a convolutional neural network to determine whether the second identification target is a living body face.
Optionally, the convolutional neural network includes: two convolutional layers, two excitation layers and one fully connected layer.
Optionally, the convolution kernels in the two convolutional layers are 3*3 matrices, and the convolution step lengths are 1 and 2 respectively; and/or
the excitation functions in the two excitation layers are the parametric rectified linear unit PReLU function; and/or
the classification function in the one fully connected layer is the Sigmoid function.
As shown in figure 20, the embodiment of the present application also provides an electronic equipment 2, which may include the face identification device 20 of the above application embodiments.
For example, the electronic equipment 2 is an intelligent door lock, a mobile phone, a computer, an access control system, or other equipment that needs to use face recognition. The face identification device 20 includes the software and hardware in the electronic equipment 2 that is used for face recognition.
It should be understood that the processor of the embodiment of the present application may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method embodiments can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or execute each method, step and logic diagram disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor, or the processor may also be any conventional processor, etc. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of the hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be appreciated that the face identification device of the embodiment of the present application may also include a memory; the memory may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), used as an external cache. By way of exemplary but non-restrictive illustration, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include but is not limited to these and any other suitable types of memory.
The embodiment of the present application also proposes a computer readable storage medium that stores one or more programs, the one or more programs including instructions which, when executed by a portable electronic device including multiple application programs, enable the portable electronic device to execute the methods of the embodiments illustrated in Fig. 1-18.
The embodiment of the present application also proposes a computer program that includes instructions which, when the computer program is executed by a computer, allow the computer to execute the methods of the embodiments illustrated in Fig. 1-18.
The embodiment of the present application also provides a chip that includes an input/output interface, at least one processor, at least one memory and a bus; the at least one memory is used for storing instructions, and the at least one processor is used for calling the instructions in the at least one memory to execute the methods of the embodiments illustrated in Fig. 1-18.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such an implementation should not be considered as going beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is merely a logical function division, and there may be other division manners in actual implementation, e.g., a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
The above are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can easily conceive of changes or substitutions within the technical scope disclosed in the present application, which shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

1. A method for face recognition, comprising:
acquiring a first target image of a first recognition target;
processing the first target image to obtain at least one first edge feature image;
determining, based on the at least one first edge feature image, whether the first recognition target is a living face, and outputting a liveness determination result;
performing feature template matching according to the first target image, and outputting a matching result;
outputting a face recognition result according to the liveness determination result and the matching result.
2. The method according to claim 1, wherein outputting the face recognition result according to the liveness determination result and the matching result comprises:
when the matching result is success, outputting the face recognition result according to the liveness determination result; or, when the liveness determination result is living, outputting the face recognition result according to the matching result; or, when the matching result is failure or the liveness determination result is non-living, outputting the face recognition result.
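The decision logic of claim 2 can be sketched as follows. This is an illustrative Python sketch, not part of the claims: the function name, the boolean encoding of the two results, and the "both checks must pass" fusion rule are assumptions for illustration.

```python
def fuse_results(is_living: bool, match_ok: bool) -> bool:
    """Combine the liveness determination result and the template
    matching result into a final face recognition result.

    In this sketch, recognition succeeds only when the target is
    determined to be a living face AND feature template matching
    succeeds; if the matching result is failure or the liveness
    result is non-living, recognition fails.
    """
    return is_living and match_ok
```

For example, a printed photo that matches an enrolled face (`match_ok=True`, `is_living=False`) is still rejected, which is the point of combining the two results.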
3. The method according to claim 1 or 2, wherein performing feature template matching according to the first target image and outputting the matching result comprises:
performing face detection based on the first target image;
when the face detection succeeds, obtaining a first face image based on the first target image;
matching the first face image against a plurality of pre-stored feature templates;
when the first face image successfully matches any one of the plurality of feature templates, outputting a matching result of success; or,
when the first face image fails to match the plurality of feature templates, outputting a matching result of failure;
or, when the face detection fails, outputting a matching result of failure.
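The matching flow of claim 3 can be sketched as follows. This is an illustrative Python sketch in which the detector, the cropping step and the per-template comparison are hypothetical stand-in callables, and the claim's success/failure outcomes are encoded as a boolean:

```python
def feature_template_matching(target_img, templates,
                              detect_face, crop_face, template_matches):
    """Sketch of claim 3: face detection, then matching the detected
    face image against the pre-stored feature templates.

    Returns True (matching result: success) only when face detection
    succeeds and the face image matches any one of the templates;
    returns False (failure) when detection fails or no template matches.
    """
    if not detect_face(target_img):
        return False                      # face detection failed -> failure
    face_img = crop_face(target_img)      # the "first face image"
    return any(template_matches(face_img, t) for t in templates)
```

With real images, `template_matches` would compare feature vectors against a similarity threshold; here any callable with the same shape works.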
4. The method according to any one of claims 1-3, wherein the first target image is a two-dimensional infrared image.
5. The method according to any one of claims 1-4, wherein processing the first target image to obtain the at least one first edge feature image comprises:
performing convolution calculations on the first target image using a plurality of low-pass convolution kernels to obtain a plurality of first low-frequency feature images;
subtracting two different first low-frequency feature images of the plurality of first low-frequency feature images to obtain one first edge feature image of the at least one first edge feature image.
6. The method according to claim 5, wherein the low-pass convolution kernels are Gaussian convolution kernels, and the first edge feature image is a difference-of-Gaussians (DoG) image.
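Claims 5 and 6 describe low-pass filtering the target image at two different scales and subtracting the results, i.e. a difference-of-Gaussians (DoG) edge feature image. A minimal NumPy sketch is shown below; the 3-sigma kernel truncation, the sigma values and the zero-padded "same" convolution are assumptions for illustration, not specified by the claims:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, truncated at 3*sigma and normalized to sum 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # separable low-pass filtering: convolve rows, then columns
    # ("same" mode zero-pads at the borders, an assumption here)
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_edge_image(img, sigma1=1.0, sigma2=2.0):
    # Subtract two low-pass (low-frequency) versions of the same image;
    # flat regions cancel, so the difference keeps mostly edge detail.
    img = img.astype(np.float64)
    return gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)
```

A flat image yields a DoG response near zero away from the borders, while an intensity step produces a strong response at the edge, which is the texture cue the liveness classifier consumes.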
7. The method according to any one of claims 1-6, wherein determining, based on the at least one first edge feature image, whether the first recognition target is a living face comprises:
reducing the at least one first edge feature image to obtain at least one first target edge feature image, and determining, based on the at least one first target edge feature image, whether the first recognition target is a living face.
8. The method according to claim 7, wherein determining, based on the at least one first target edge feature image, whether the first recognition target is a living face comprises:
performing classification processing on the at least one first target edge feature image by a convolutional neural network, to determine whether the first recognition target is a living face.
9. The method according to claim 8, wherein the convolutional neural network comprises: at least one convolutional layer, at least one excitation layer and at least one fully connected layer.
10. The method according to claim 9, wherein performing classification processing on the at least one first target edge feature image by the convolutional neural network comprises:
performing, by the at least one convolutional layer, convolution calculations on the at least one first target edge feature image to obtain a plurality of feature maps;
performing, by the at least one excitation layer, non-linear processing on the plurality of feature maps to obtain a plurality of sparse feature maps;
performing, by the at least one fully connected layer, full connection on the plurality of sparse feature maps to obtain a plurality of feature constants; and performing classification processing on the plurality of feature constants using a classification function.
11. The method according to claim 9 or 10, wherein the at least one convolutional layer comprises two convolutional layers, the at least one excitation layer comprises two excitation layers, and the at least one fully connected layer comprises one fully connected layer.
12. The method according to any one of claims 9-11, wherein the at least one convolutional layer comprises a first convolutional layer and a second convolutional layer, a convolution stride of the first convolutional layer is 1, and a convolution stride of the second convolutional layer is 2.
13. The method according to any one of claims 9-12, wherein the convolution kernels in the at least one convolutional layer are 3*3 matrices, and/or the excitation function in the at least one excitation layer is a parametric rectified linear unit (PReLU) function, and/or the classification function in the at least one fully connected layer is a Sigmoid function.
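Claims 9-13 describe a small CNN: convolutional layers with 3*3 kernels and strides 1 and 2, PReLU excitation layers, and one fully connected layer with a Sigmoid classification function. The NumPy sketch below illustrates that data flow for a single-channel input with one kernel per layer; the single-kernel simplification, the "valid" (no padding) convolution and all weight shapes are assumptions for illustration (a real implementation would use a deep learning framework and learned weights):

```python
import numpy as np

def conv2d(x, w, stride):
    # "valid" 2-D cross-correlation of one single-channel feature map
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

def prelu(x, a=0.25):
    # parametric rectified linear unit: negative inputs are scaled by a
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def liveness_score(edge_img, w1, w2, w_fc, b_fc):
    f = prelu(conv2d(edge_img, w1, stride=1))  # first conv layer, stride 1
    f = prelu(conv2d(f, w2, stride=2))         # second conv layer, stride 2
    # one fully connected layer followed by the Sigmoid classification
    # function; the scalar output can be read as a living-face probability
    return sigmoid(f.ravel() @ w_fc + b_fc)
```

For a 16x16 edge feature image, the stride-1 layer yields a 14x14 map and the stride-2 layer a 6x6 map, so `w_fc` here has 36 weights.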
14. The method according to any one of claims 1-13, wherein the method further comprises:
acquiring a second target image of a second recognition target;
processing the second target image to obtain at least one second edge feature image;
performing face anti-spoofing discrimination based on the at least one second edge feature image, to determine whether the second recognition target is a living face, wherein a result of the face anti-spoofing discrimination is used for establishing a face feature template.
15. The method according to claim 14, wherein the second target image is a second infrared image.
16. The method according to claim 14 or 15, wherein the method further comprises:
establishing the face feature template based on the second target image.
17. The method according to claim 16, wherein the method further comprises:
performing face detection based on the second target image;
wherein establishing the face feature template based on the second target image comprises:
when the face detection succeeds, cropping a face image from the second target image to form a second face image, and establishing the face feature template based on the second face image.
18. The method according to claim 17, wherein establishing the face feature template based on the second face image comprises:
determining whether the second face image belongs to a face feature template library;
when the second face image belongs to the face feature template library, matching the second face image against a plurality of face feature templates in the face feature template library;
when the second face image does not belong to the face feature template library, performing face anti-spoofing discrimination based on the at least one second edge feature image, and when it is determined that the second recognition target is a living face, establishing the second face image as a face feature template.
19. The method according to claim 18, wherein matching the second face image against the plurality of face feature templates in the face feature template library comprises:
when the matching succeeds, performing face anti-spoofing discrimination based on the at least one second edge feature image;
when it is determined that the second recognition target is a living face, establishing the second face image as a face feature template.
20. The method according to claim 19, wherein performing face anti-spoofing discrimination based on the at least one second edge feature image when the matching succeeds comprises:
when the matching succeeds, acquiring 3D point cloud data of the second recognition target;
when the 3D point cloud data is a valid point cloud, performing face anti-spoofing discrimination based on the at least one second edge feature image.
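The template enrollment flow of claims 18-20 — match against the library when the face image already belongs to it, and enroll a new face feature template only after the anti-spoofing check confirms a living face (with the 3D point cloud additionally validated on a successful match) — can be sketched as follows. This is an illustrative Python sketch: every function and parameter name is hypothetical, and the library is simplified to a plain list:

```python
def update_template_library(face_img, library, belongs_to_library,
                            matches_template, point_cloud_valid,
                            is_living_face):
    """Hypothetical sketch of the enrollment logic of claims 18-20.

    `belongs_to_library`, `matches_template`, `point_cloud_valid` and
    `is_living_face` stand in for the corresponding checks in the
    claims. Returns True if `face_img` was enrolled as a template.
    """
    if belongs_to_library(face_img, library):
        # Claims 19-20: on a successful match, validate the 3D point
        # cloud, then run anti-spoofing before enrolling.
        if (matches_template(face_img, library)
                and point_cloud_valid(face_img)
                and is_living_face(face_img)):
            library.append(face_img)
            return True
    else:
        # Claim 18: a face image outside the library is enrolled only
        # if the anti-spoofing discrimination says it is a living face.
        if is_living_face(face_img):
            library.append(face_img)
            return True
    return False
```

The gate in both branches is the same anti-spoofing discrimination, which keeps spoofed faces (photos, masks) from ever entering the template library.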
21. The method according to any one of claims 14-20, wherein processing the second target image to obtain the at least one second edge feature image comprises:
performing convolution calculations on the second target image using a plurality of low-pass convolution kernels to obtain a plurality of second low-frequency feature images;
subtracting two different second low-frequency feature images of the plurality of second low-frequency feature images to obtain one second edge feature image of the at least one second edge feature image.
22. The method according to claim 21, wherein the low-pass convolution kernels are Gaussian convolution kernels, and the second edge feature image is a difference-of-Gaussians (DoG) image.
23. The method according to any one of claims 14-22, wherein performing face anti-spoofing discrimination based on the at least one second edge feature image comprises:
reducing the at least one second edge feature image to obtain at least one second target edge feature image, and performing face anti-spoofing discrimination based on the at least one second target edge feature image.
24. The method according to claim 23, wherein performing face anti-spoofing discrimination based on the at least one second target edge feature image comprises:
performing classification processing on the at least one second target edge feature image by a convolutional neural network, to determine whether the second recognition target is a living face.
25. The method according to claim 24, wherein the convolutional neural network comprises: two convolutional layers, two excitation layers and one fully connected layer.
26. The method according to any one of claims 14-25, wherein the convolution kernels in the two convolutional layers are 3*3 matrices, and the convolution strides are 1 and 2 respectively; and/or
the excitation function in the two excitation layers is a parametric rectified linear unit (PReLU) function; and/or
the classification function in the one fully connected layer is a Sigmoid function.
27. An apparatus for face recognition, comprising: a processor;
wherein the processor is configured to perform the face recognition method according to any one of claims 1 to 26.
28. An electronic device, comprising:
the face recognition apparatus according to claim 27.
CN201980001102.6A 2019-06-27 2019-06-27 Face recognition method, apparatus and electronic device Pending CN110520865A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/093161 WO2020258120A1 (en) 2019-06-27 2019-06-27 Face recognition method and device, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN110520865A true CN110520865A (en) 2019-11-29

Family

ID=68634393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980001102.6A Pending CN110520865A (en) 2019-06-27 2019-06-27 Face recognition method, apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN110520865A (en)
WO (1) WO2020258120A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582045A (en) * 2020-04-15 2020-08-25 深圳市爱深盈通信息技术有限公司 Living body detection method and device and electronic equipment
CN111666884A (en) * 2020-06-08 2020-09-15 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
WO2021135639A1 (en) * 2019-12-30 2021-07-08 支付宝实验室(新加坡)有限公司 Living body detection method and apparatus

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN113378715B (en) * 2021-06-10 2024-01-05 北京华捷艾米科技有限公司 Living body detection method based on color face image and related equipment
CN113469054A (en) * 2021-07-02 2021-10-01 哈尔滨理工大学 Infrared human face recognition method based on deep learning
CN114596535B (en) * 2022-03-22 2023-02-03 天目爱视(北京)科技有限公司 Non-contact doorbell visit processing method and related equipment
CN116386118B (en) * 2023-04-17 2024-04-05 广州番禺职业技术学院 Drama matching cosmetic system and method based on human image recognition

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103886301A (en) * 2014-03-28 2014-06-25 Institute of Automation, Chinese Academy of Sciences Human face liveness detection method
CN105718863A (en) * 2016-01-15 2016-06-29 Beijing Hisign Technology Co., Ltd. Living-person face detection method, device and system
CN108734057A (en) * 2017-04-18 2018-11-02 Beijing Megvii Technology Co., Ltd. Living body detection method, apparatus and computer storage medium
CN109543635A (en) * 2018-11-29 2019-03-29 Beijing Megvii Technology Co., Ltd. Living body detection method, device, system, unlocking method, terminal and storage medium
CN109685018A (en) * 2018-12-26 2019-04-26 Shenzhen Jieshun Science and Technology Industry Co., Ltd. Person-certificate verification method, system and related device
CN109858337A (en) * 2018-12-21 2019-06-07 TP-Link Technologies Co., Ltd. Face recognition method, system and device based on pupil information
CN109858381A (en) * 2019-01-04 2019-06-07 Shenzhen OneConnect Smart Technology Co., Ltd. Living body detection method, device, computer equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2018193311A1 (en) * 2017-04-18 2018-10-25 Hushchyn Yury Dynamic real-time generation of three-dimensional avatar models of users based on live visual input of users' appearance and computer systems and computer-implemented methods directed to thereof
CN109460733A (en) * 2018-11-08 2019-03-12 Beijing Athena Eyes Technology Co., Ltd. Face recognition living body detection method and system based on a single camera, and storage medium

Non-Patent Citations (2)

Title
Li Yi: "Research on Anti-Photo Spoofing Detection Methods for Face Recognition", China Master's Theses Full-text Database, Information Science and Technology Series, pages 29-30 *
Li Leida: "Feature Extraction Methods and Applications in Image Quality Assessment", China University of Mining and Technology Press, pages 74-75 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2021135639A1 (en) * 2019-12-30 2021-07-08 支付宝实验室(新加坡)有限公司 Living body detection method and apparatus
CN111582045A (en) * 2020-04-15 2020-08-25 深圳市爱深盈通信息技术有限公司 Living body detection method and device and electronic equipment
CN111666884A (en) * 2020-06-08 2020-09-15 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
CN111666884B (en) * 2020-06-08 2023-08-25 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer readable medium and electronic equipment

Also Published As

Publication number Publication date
WO2020258120A1 (en) 2020-12-30

Similar Documents

Publication Publication Date Title
CN110462633A Face recognition method, apparatus and electronic device
CN110520865A Face recognition method, apparatus and electronic device
CN110383288A Face recognition method, apparatus and electronic device
Wang et al. Research on face recognition based on deep learning
CN105518709B Method, system and computer program product for face recognition
CN110462632A Face recognition method, apparatus and electronic device
CN101558431B Face authentication device
CN103942577B Identity recognition method based on a self-built sample database and composite features in video surveillance
CN101999900B Living body detection method and system applied to face recognition
CN106446779B Identity recognition method and device
Kashem et al. Face recognition system based on principal component analysis (PCA) with back propagation neural networks (BPNN)
CN108416307A Aerial image road surface crack detection method, device and equipment
CN110516576A Near-infrared living face recognition method based on a deep neural network
CN104850825A Face image score calculation method based on a convolutional neural network
CN106446872A Detection and recognition method for human faces in video under low-light conditions
CN107330371A Method, device and storage device for acquiring facial expressions of a 3D face model
CN109271895A Pedestrian re-identification method based on multi-scale feature learning and feature segmentation
CN110263768A Face recognition method based on a deep residual network
CN109948467A Face recognition method, apparatus, computer equipment and storage medium
CN114862837A Human body security check image detection method and system based on improved YOLOv5s
CN110263670A Face local feature analysis system
JP2005316888A Face recognition system
CN114565448A Loan risk information mining method based on video recognition
Sakthimohan et al. Detection and Recognition of Face Using Deep Learning
CN106156739A Certificate photo ear detection and extraction method based on face contour analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination